
    Improving Communication in Scrum Teams

    Communication in teams is an important but difficult issue. In a Scrum development process, the Daily Scrum meetings are used to inform others about important problems, news, and events in the project. When team members are absent due to vacation, illness, or travel, they miss relevant information because there is no document that records the content of these meetings. We present a concept and a Twitter-like tool that improve communication in a Scrum development process. We take advantage of the observation that many people do not like to create documentation, but they do like to share what they did. We used the tool in industrial practice and observed an improvement in communication.

    Individual characteristics of successful coding challengers

    Assessing a software engineer's ability to solve algorithmic programming tasks has been an essential part of technical interviews at some of the most successful technology companies for several years now. Despite the adoption of coding challenges among these companies, we do not know what influences the performance of different software engineers in solving such challenges. We conducted an exploratory study with software engineering students to find hypotheses on which individual characteristics make a good coding challenge solver. Our findings show that the better coding challenge solvers also have better exam grades and more programming experience. Furthermore, conscientious as well as sad software engineers performed worse in our study.

    Perception and Acceptance of an Autonomous Refactoring Bot

    The use of autonomous bots for automatic support in software development tasks is increasing. In the past, however, they were not always perceived positively and sometimes experienced a negative bias compared to their human counterparts. We conducted a qualitative study in which we deployed an autonomous refactoring bot for 41 days in a student software development project. In between and at the end, we conducted semi-structured interviews to find out how developers perceive the bot and whether they are more or less critical when reviewing the contributions of a bot compared to human contributions. Our findings show that the bot was perceived as a useful and unobtrusive contributor, and developers were no more critical of it than of their human colleagues, but only a few team members felt responsible for the bot. Comment: 8 pages, 2 figures. To be published at the 12th International Conference on Agents and Artificial Intelligence (ICAART 2020).

    Evidence Profiles for Validity Threats in Program Comprehension Experiments

    Searching for clues, gathering evidence, and reviewing case files are all techniques used by criminal investigators to draw sound conclusions and avoid wrongful convictions. Similarly, in software engineering (SE) research, we can develop sound methodologies and mitigate threats to validity by basing study design decisions on evidence. Echoing a recent call for the empirical evaluation of design decisions in program comprehension experiments, we conducted a two-phase study consisting of systematic literature searches, snowballing, and thematic synthesis. We identified (1) which validity threat categories are most often discussed in primary studies of code comprehension, and we collected evidence to build (2) evidence profiles for the three most commonly reported threats to validity. We discovered that few mentions of validity threats in primary studies (31 of 409) included a reference to supporting evidence. For the three most commonly mentioned threats, namely the influence of programming experience, program length, and the selected comprehension measures, almost all cited studies (17 of 18) did not meet our criteria for evidence. We show that for many threats to validity that are currently assumed to be influential across all studies, the actual impact may depend on the design and context of each specific study. Researchers should therefore discuss threats to validity within the context of their particular study and support their discussions with evidence. The present paper can serve as one resource for such evidence, and we call for more meta-studies of this type to be conducted, which will then inform design decisions in primary studies. Further, although we have applied our methodology in the context of program comprehension, our approach can also be used in other SE research areas to enable evidence-based experiment design decisions and meaningful discussions of threats to validity. Comment: 13 pages, 4 figures, 5 tables. To be published at ICSE 2023: Proceedings of the 45th IEEE/ACM International Conference on Software Engineering.

    Resist the Hype! Practical Recommendations to Cope With Résumé-Driven Development

    Technology trends play an important role in the hiring process for software and IT professionals. In a recent study of 591 software professionals in both hiring (130) and technical (558) roles, we found empirical support for a tendency to overemphasize technology trends in résumés and in the application process. 60% of the hiring professionals agreed that such trends would influence their job advertisements. Among the software professionals, 82% believed that using trending technologies in their daily work would make them more attractive to potential future employers. This phenomenon has previously been reported anecdotally and somewhat humorously under the label Résumé-Driven Development (RDD). Our article seeks to initiate a more serious debate about the consequences of RDD on software development practice. We explain how the phenomenon may constitute a harmful self-sustaining dynamic, and we provide practical recommendations for both the hiring and applicant perspectives to change the current situation for the better. Comment: 8 pages, 4 figures.

    A Fine-grained Data Set and Analysis of Tangling in Bug Fixing Commits

    Context: Tangled commits are changes to software that address multiple concerns at once. For researchers interested in bugs, tangled commits mean that they actually study not only bugs but also other concerns that are irrelevant to the study of bugs. Objective: We want to improve our understanding of the prevalence of tangling and of the types of changes that are tangled within bug fixing commits. Methods: We use a crowdsourcing approach for manual labeling to validate which changes contribute to bug fixes for each line in bug fixing commits. Each line is labeled by four participants. If at least three participants agree on the same label, we have consensus. Results: We estimate that between 17% and 32% of all changes in bug fixing commits modify the source code to fix the underlying problem. However, when we only consider changes to the production code files, this ratio increases to 66% to 87%. We find that about 11% of lines are hard to label, leading to active disagreements between participants. Due to confirmed tangling and the uncertainty in our data, we estimate that 3% to 47% of the data is noisy without manual untangling, depending on the use case. Conclusion: Tangled commits have a high prevalence in bug fixes and can lead to a large amount of noise in the data. Prior research indicates that this noise may alter results. As researchers, we should be skeptical and assume that unvalidated data is likely very noisy until proven otherwise. Comment: Accepted at Empirical Software Engineering.
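    The consensus rule described in this abstract (four labels per changed line, agreement of at least three participants) can be illustrated with a minimal sketch. The function and label names below are illustrative assumptions, not taken from the study's tooling.

    ```python
    from collections import Counter

    def consensus_label(labels, threshold=3):
        """Return the consensus label for one changed line, or None.

        `labels` holds the annotations given by the four participants,
        e.g. ["bugfix", "bugfix", "bugfix", "refactoring"]. A label counts
        as consensus only if at least `threshold` participants chose it.
        """
        label, count = Counter(labels).most_common(1)[0]
        return label if count >= threshold else None

    # Three of four participants agree, so we have consensus.
    print(consensus_label(["bugfix", "bugfix", "bugfix", "test"]))               # bugfix
    # A 2-2 split is an active disagreement, so no consensus label is assigned.
    print(consensus_label(["bugfix", "refactoring", "bugfix", "refactoring"]))   # None
    ```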

    Towards an autonomous bot for automatic source code refactoring

    Continuous refactoring is necessary to maintain source code quality and to cope with technical debt. Since manual refactoring is inefficient and error-prone, various solutions for automated refactoring have been proposed in the past. However, empirical studies have shown that these solutions are not widely accepted by software developers, and most refactorings are still performed manually. For example, developers reported that refactoring tools should support functionality for reviewing changes. They also criticized that introducing such tools would require substantial effort for configuration and integration into the current development environment. In this paper, we present our work towards the Refactoring-Bot, an autonomous bot that integrates into the team like a human developer via the existing version control platform. The bot automatically performs refactorings to resolve code smells and presents the changes to a developer for asynchronous review via pull requests. This way, developers are not interrupted in their workflow and can review the changes at any time with familiar tools. Proposed refactorings can then be integrated into the code base at the push of a button. We elaborate on our vision, discuss design decisions, describe the current state of development, and give an outlook on planned development and research activities.
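    The propose-and-review workflow described in this abstract could look roughly like the following sketch. All names (CodeSmell, detect_code_smells, apply_refactoring, open_pull_request) are hypothetical stand-ins, not the actual Refactoring-Bot implementation or the API of any real version control platform.

    ```python
    from dataclasses import dataclass

    @dataclass
    class CodeSmell:
        kind: str       # e.g. "long method", "duplicated code"
        location: str   # file and line where the smell was found

    # Hypothetical stand-ins for the bot's analysis and refactoring steps.
    def detect_code_smells(repo_path):
        return [CodeSmell(kind="long method", location="src/Billing.java:120")]

    def apply_refactoring(repo_path, smell):
        print(f"Refactoring '{smell.kind}' at {smell.location} on a dedicated branch")

    def open_pull_request(title, description):
        print(f"Opened pull request: {title}\n  {description}")

    def run_bot_iteration(repo_path):
        """One propose-and-review cycle: refactor automatically, but let humans decide."""
        for smell in detect_code_smells(repo_path):
            apply_refactoring(repo_path, smell)
            open_pull_request(
                title=f"Refactoring proposal: {smell.kind}",
                description="Automatically generated change; review asynchronously and merge if acceptable.",
            )

    run_bot_iteration("path/to/project")
    ```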

    Anchoring Code Understandability Evaluations Through Task Descriptions

    In code comprehension experiments, participants are usually told at the beginning what kind of code comprehension task to expect. Describing experiment scenarios and experimental tasks will influence participants in ways that are sometimes hard to predict and control. In particular, describing or even mentioning the difficulty of a code comprehension task might anchor participants and their perception of the task itself. In this study, we investigated in a randomized, controlled experiment with 256 participants (50 software professionals and 206 computer science students) whether a hint about the difficulty of the code to be understood in a task description anchors participants in their own code comprehensibility ratings. Subjective code evaluations are a commonly used measure for how well a developer in a code comprehension study understood the code. Accordingly, it is important to understand how robust these measures are to cognitive biases such as the anchoring effect. Our results show that participants are significantly influenced by the initial scenario description in their assessment of code comprehensibility. An initial hint that the code is hard to understand leads participants to assess the code as harder to understand than participants who received no hint or a hint that the code is easy to understand. This affects students and professionals alike. We discuss examples of design decisions and contextual factors in the conduct of code comprehension experiments that can induce an anchoring effect, and we recommend the use of more robust comprehension measures in code comprehension studies to enhance the validity of results. Comment: 8 pages, 2 figures. To appear in ICPC '22: IEEE/ACM International Conference on Program Comprehension, May 21-22, 2022, Pittsburgh, Pennsylvania, United States.

    A theory on individual characteristics of successful coding challenge solvers

    Background: Assessing a software engineer's ability to solve algorithmic programming tasks has been an essential part of technical interviews at some of the most successful technology companies for several years now. We do not know to what extent individual characteristics, such as personality or programming experience, predict the performance in such tasks. Decision makers' unawareness of possible predictor variables has the potential to bias hiring decisions, which can result in expensive false negatives as well as in the unintended exclusion of software engineers with actually desirable characteristics. Methods: We conducted an exploratory quantitative study with 32 software engineering students to develop an empirical theory on which individual characteristics predict the performance in solving coding challenges. We developed our theory based on an established taxonomy framework by Gregor (2006). Results: Our findings show that the better coding challenge solvers also have better exam grades and more programming experience. Furthermore, conscientious as well as sad software engineers performed worse in our study. We make the theory available in this paper for empirical testing. Discussion: The theory raises awareness of the influence of individual characteristics on the outcome of technical interviews. Should the theory find empirical support in future studies, hiring costs could be reduced by selecting appropriate criteria for preselecting candidates for on-site interviews, and potential bias in hiring decisions could be reduced by taking suitable measures.